
    Implementation of a low-cost, web-based, multi-component training for trauma-focused cognitive-behavioral therapy

    Although continuing education appears to be a promising strategy for closing the research-to-practice gap, effective trainings that result in clinician behavior change remain expensive and largely inaccessible. The current study evaluated a low-cost, multicomponent, web-based training for Trauma-Focused Cognitive-Behavioral Therapy (TF-CBT). Clinician members of a practice-based research network were recruited via email and randomized to either a training group (TG; N=89 assigned) or control group (CG; N=74 assigned), with half of each group randomized to receive incentives for completion. The TG was immediately offered the training; the CG was offered the same training after 6 months. Clinicians completed assessments at baseline (pre-training), 6 months, and 12 months covering (a) completion of training components, (b) knowledge, (c) use of TF-CBT, and (d) for a subset of clinicians (N=34), TF-CBT fidelity. There were no significant between-group differences in TF-CBT knowledge and strategy use at 6 months, although significant differences in overall TF-CBT skill were found. There was also considerable variability in the extent of training completed. We found significant positive associations between extent of training completed and clinician TF-CBT knowledge, use, and fidelity. A multiple regression showed that previous TF-CBT training, clinician attitudes towards evidence-based practices, and clinician age predicted training completion. Implications for web-based trainings and implementation science are discussed.
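
    The multiple regression reported above can be illustrated with a minimal sketch: extent of training completed regressed on prior TF-CBT training, attitudes toward evidence-based practices, and clinician age. The variable names and toy data below are assumptions for illustration only and are not drawn from the study's dataset.

        # Hypothetical sketch of the kind of multiple regression described in the abstract.
        # Column names and values are placeholders, not the study's actual data.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.DataFrame({
            "training_completion": [0.2, 0.5, 0.9, 0.4, 0.7, 0.1, 0.8, 0.6],  # proportion of components completed
            "prior_tfcbt_training": [0, 1, 1, 0, 1, 0, 1, 0],                 # 1 = prior TF-CBT training
            "ebp_attitudes": [2.1, 3.4, 4.0, 2.8, 3.9, 2.0, 3.7, 3.1],        # attitude-scale score (assumed)
            "age": [29, 41, 38, 52, 35, 60, 33, 47],
        })

        # Ordinary least squares with the three clinician-level predictors
        model = smf.ols("training_completion ~ prior_tfcbt_training + ebp_attitudes + age", data=df).fit()
        print(model.summary())  # coefficients indicate which clinician factors predict completion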

    Implementing measurement-based care (iMBC) for depression in community mental health: a dynamic cluster randomized trial study protocol

    BACKGROUND: Measurement-based care (MBC) is an evidence-based practice for depression that efficiently identifies treatment non-responders and those who might otherwise deteriorate [1]. However, MBC is underutilized in community mental health, with data suggesting that fewer than 20% of behavioral health providers use this practice to inform treatment. It remains unclear whether standardized or tailored approaches to implementation are needed to optimize MBC fidelity and penetration. Moreover, there is some suggestion that prospectively tailored interventions designed to fit the dynamic context may optimize public health impact, though no randomized trials have yet tested this notion [2]. This study will address three aims: (1) compare the effect of standardized versus tailored MBC implementation on clinician-level and client-level outcomes; (2) identify contextual mediators of MBC fidelity; and (3) explore the impact of MBC fidelity on client outcomes. METHODS/DESIGN: This study is a dynamic cluster randomized trial of standardized versus tailored MBC implementation in Centerstone, the largest provider of community-based mental health services in the USA. This prospective, mixed methods implementation-effectiveness hybrid design allows for evaluation of the two conditions on both clinician-level (e.g., MBC fidelity) and client-level (depression symptom change) outcomes. Central to this investigation is the focus on identifying contextual factors (e.g., attitudes, resources, process) that mediate MBC fidelity and optimize client outcomes. DISCUSSION: This study will contribute generalizable and practical strategies for implementing systematic symptom monitoring to inform and enhance behavioral healthcare. TRIAL REGISTRATION: ClinicalTrials.gov NCT02266134
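
    Purely as a hedged illustration of the analytic structure such a cluster randomized trial implies (the protocol does not specify this code), a multilevel model with clients nested within clinicians and condition as the cluster-level predictor might look like the sketch below; all variable names and toy data are invented for the example.

        # Illustrative multilevel model sketch: clients nested within clinicians,
        # implementation condition and MBC fidelity as predictors of symptom change.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.DataFrame({
            "clinician_id":      [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
            "condition":         [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],   # 0 = standardized, 1 = tailored
            "mbc_fidelity":      [0.4, 0.4, 0.4, 0.6, 0.6, 0.6, 0.7, 0.7, 0.7, 0.9, 0.9, 0.9],
            "depression_change": [-3, -4, -2, -5, -6, -4, -6, -7, -5, -8, -9, -7],  # symptom change (toy values)
        })

        # Random intercept for each clinician cluster
        m = smf.mixedlm("depression_change ~ condition + mbc_fidelity",
                        data=df, groups=df["clinician_id"]).fit()
        print(m.summary())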

    Proceedings of the 3rd Biennial Conference of the Society for Implementation Research Collaboration (SIRC) 2015: advancing efficient methodologies through community partnerships and team science

    It is well documented that the majority of adults, children and families in need of evidence-based behavioral health interventions do not receive them [1, 2] and that few robust empirically supported methods for implementing evidence-based practices (EBPs) exist. The Society for Implementation Research Collaboration (SIRC) represents a burgeoning effort to advance the innovation and rigor of implementation research and is uniquely focused on bringing together researchers and stakeholders committed to evaluating the implementation of complex evidence-based behavioral health interventions. Through its diverse activities and membership, SIRC aims to foster the promise of implementation research to better serve the behavioral health needs of the population by identifying rigorous, relevant, and efficient strategies that successfully transfer scientific evidence to clinical knowledge for use in real world settings [3]. SIRC began as a National Institute of Mental Health (NIMH)-funded conference series in 2010 (previously titled the “Seattle Implementation Research Conference”; $150,000 USD for 3 conferences in 2011, 2013, and 2015) with the recognition that there were multiple researchers and stakeholders working in parallel on innovative implementation science projects in behavioral health, but that formal channels for communicating and collaborating with one another were relatively unavailable. There was a significant need for a forum within which implementation researchers and stakeholders could learn from one another, refine approaches to science and practice, and develop an implementation research agenda using common measures, methods, and research principles to improve both the frequency and quality with which behavioral health treatment implementation is evaluated. SIRC’s membership growth is a testament to this identified need, with more than 1000 members from 2011 to the present. SIRC’s primary objectives are to: (1) foster communication and collaboration across diverse groups, including implementation researchers, intermediaries, as well as community stakeholders (SIRC uses the term “EBP champions” for these groups) – and to do so across multiple career levels (e.g., students, early career faculty, established investigators); and (2) enhance and disseminate rigorous measures and methodologies for implementing EBPs and evaluating EBP implementation efforts. These objectives are well aligned with Glasgow and colleagues’ [4] five core tenets deemed critical for advancing implementation science: collaboration, efficiency and speed, rigor and relevance, improved capacity, and cumulative knowledge. SIRC advances these objectives and tenets through in-person conferences, which bring together multidisciplinary implementation researchers and those implementing evidence-based behavioral health interventions in the community to share their work and create professional connections and collaborations.

    Longitudinal, naturalistic study of training and support for implementation of evidence-based youth mental health practices among community providers

    Ongoing training and implementation support for mental health (MH) providers may help to bridge the often-noted research-to-practice gap in community MH care. However, MH providers typically have limited ability to access training or implementation support in evidence-based practices (EBPs). To address this need, the current study describes the reach and impact of a county-wide youth MH initiative aimed at increasing youths' and families' access to effective youth MH services by providing free EBP training and implementation support to MH service providers. Specifically, the initiative offered 1) formal workshops focused on specific EBPs, 2) a biweekly learning community, 3) individual case consultation or supervision, and 4) a confidential online session-bysession clinical feedback system. To evaluate the training initiative, we employed a mixed methods approach within a naturalistic, longitudinal design. Providers (N = 717) were asked to complete an initiatrainings and on a yearly basis thereafter (n = 255 completed at least one follow-up assessment). Measures included demographics, clinical practice information, self-reported confidence in treating youths with various problem types, organizational implementation climate, and EBP knowledge, attitudes, and practices. Additionally, we completed individual, semi-structured, qualitative interviews with a stratified purposeful sampling of providers (n = 13) based on level of participation in the training. While the training initiative had high reach, far fewer providers ultimately engaged in training. Results suggested providers who were trainees, who had higher baseline knowledge of EBPs, who used common evidence-based strategies more extensively, and who used other therapy strategies less extensively, engaged in more training. A provider's stage of career (i.e., being a pre-service trainee or post-graduate provider) consistently showed differences in training outcomes, with trainees haviEBPs, using common evidence-based strategies and other therapy strategies less extensively, and self-reporting less confidence in their effectiveness than post-graduate providers. Contrary to hypotheses, quantity and method of training were less consistently associated with change in training outcomes. Rapid qualitative analysis of semi-structured interviews complemented and expanded upon the quantitative results, illuminating provider, organizational, system, practical, and training activity-specific barriers and facilitators to training engagement and EBP implementation. Implications for future research and training initiatives are discussed in light of these findings.Includes bibliographical references

    A methodology for generating a tailored implementation blueprint: an exemplar from a youth residential setting

    Background: Tailored implementation approaches are touted as more likely to support the integration of evidence-based practices. However, to our knowledge, few methodologies for tailoring implementation exist. This manuscript applies a model-driven, mixed methods needs assessment to identify the determinants of practice and pilots a modified conjoint analysis method to generate an implementation blueprint, using a case example of a cognitive behavioral therapy (CBT) implementation in a youth residential center. Methods: Our proposed methodology contains five steps to address two goals: (1) identify the determinants of practice and (2) select and match implementation strategies to address the identified determinants (focusing on barriers). Participants in the case example included mental health therapists and operations staff in two programs of Wolverine Human Services. For step 1, the needs assessment, they completed surveys (clinician N = 10; operations staff N = 58; other N = 7) and participated in focus groups (clinician N = 15; operations staff N = 38) guided by the domains of the Framework for Diffusion [1]. For step 2, the research team conducted mixed methods analyses following the QUAN + QUAL structure for the purpose of convergence and expansion in a connecting process, revealing 76 unique barriers. Step 3 consisted of a modified conjoint analysis: for step 3a, agency administrators prioritized the identified barriers according to feasibility and importance; for step 3b, strategies were selected from a published compilation and rated for feasibility and likelihood of impacting CBT fidelity. For step 4, sociometric surveys informed implementation team member selection, and a meeting was held to identify officers and clarify goals and responsibilities. For step 5, blueprints were generated for the pre-implementation, implementation, and sustainment phases. Results: Forty-five unique strategies were prioritized across the 5 years and three phases, representing all nine strategy categories. Conclusions: Our novel methodology offers a relatively low-burden, collaborative approach to generating an implementation plan that leverages advances in implementation science, including measurement, models, strategy compilations, and methods from other fields.
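
    The step 3 prioritization can be illustrated with a toy sketch: items rated on two dimensions (importance and feasibility) are combined into a single score and ranked. The barrier names, ratings, and weighting scheme below are hypothetical and only illustrate the arithmetic, not the authors' actual conjoint procedure.

        # Toy sketch of ranking barriers by combined importance/feasibility ratings.
        # All names, ratings, and the weighting are illustrative assumptions.
        barriers = {
            # name: (importance 1-5, feasibility-to-address 1-5)
            "limited supervision time": (5, 3),
            "staff turnover": (4, 2),
            "no fidelity monitoring tools": (4, 4),
        }

        def priority(importance, feasibility, w_importance=0.5):
            """Weighted combination of the two ratings; the weight is an assumption."""
            return w_importance * importance + (1 - w_importance) * feasibility

        ranked = sorted(barriers.items(), key=lambda kv: priority(*kv[1]), reverse=True)
        for name, (imp, feas) in ranked:
            print(f"{name}: priority = {priority(imp, feas):.1f}")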

    Implementing measurement based care in community mental health: a description of tailored and standardized methods

    Objective: Although tailored implementation methods are touted as superior to standardized ones, few researchers have directly compared the two, and little guidance exists regarding the specific details of each method. Our study compares these methods in a dynamic cluster randomized trial seeking to optimize implementation of measurement-based care (MBC) for depression in community behavioral health. This manuscript provides a detailed, replicable account of the components of each multi-faceted implementation method. Results: The standardized, best-practice method includes training, consultation, a clinical guideline, and electronic health record enhancements, with the goal of optimizing the delivery of MBC with fidelity. Conversely, the tailored, customized, and collaborative method is informed by recent advances in implementation science and begins with a needs assessment, followed by tailored training that feeds barriers data back to clinicians, the formation of an implementation team, a clinician-driven, clinic-specific guideline, and the use of fidelity data to inform implementation team activities; the goal of the tailored condition is to ensure the intervention and implementation strategies address unique factors of the context. The description of these methods will inform others seeking to implement MBC, as well as those planning to use standardized or tailored implementation methods for interventions beyond behavioral health.

    Initial Validation of a Computerized Adaptive Test for Substance Use Disorder Identification in Adolescents

    Computerized adaptive tests (CATs) are highly efficient assessment tools that couple low patient and clinician time burden with high diagnostic accuracy. A CAT for substance use disorders (CAT-SUD-E) has been validated in adult populations but has yet to be tested in adolescents. The purpose of this study was to perform an initial evaluation of the K-CAT-SUD-E (i.e., Kiddy-CAT-SUD-E) in an adolescent sample against a gold-standard diagnostic interview. Adolescents (N = 156; aged 11–17) with diverse substance use histories completed the K-CAT-SUD-E electronically and the substance-related disorders portion of a clinician-conducted diagnostic interview (K-SADS) via a tele-videoconferencing platform. The K-CAT-SUD-E assessed both current and lifetime overall SUD and substance-specific diagnoses for nine substance classes. Using the K-CAT-SUD-E continuous severity score and diagnoses to predict the presence of any K-SADS SUD diagnosis, classification accuracy ranged from excellent for current SUD (AUC = 0.89, 95% CI = 0.81, 0.95) to outstanding for lifetime SUD (AUC = 0.93, 95% CI = 0.82, 0.97). Regarding current substance-specific diagnoses, classification accuracy was excellent for alcohol (AUC = 0.82), cannabis (AUC = 0.83), and nicotine/tobacco (AUC = 0.90). For lifetime substance-specific diagnoses, classification accuracy ranged from excellent (e.g., opioids, AUC = 0.84) to outstanding (e.g., stimulants, AUC = 0.96). Median K-CAT-SUD-E completion time was 4 min 22 s, compared to 45 min for the K-SADS. This study provides initial support for the K-CAT-SUD-E as a feasible, accurate diagnostic tool for assessing SUDs in adolescents. Future studies should further validate the K-CAT-SUD-E in a larger sample of adolescents and examine its acceptability, feasibility, and scalability in youth-serving settings.
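
    The validation metric reported above, area under the ROC curve for a continuous CAT severity score predicting a binary interview-based diagnosis, can be sketched as follows. The data are made up for illustration; only the metric itself reflects the analysis described.

        # Illustrative AUC computation: continuous severity score vs. gold-standard diagnosis.
        from sklearn.metrics import roc_auc_score

        ksads_diagnosis = [0, 0, 1, 1, 0, 1, 0, 1]                    # K-SADS SUD diagnosis (placeholder values)
        cat_severity    = [0.1, 0.3, 0.8, 0.7, 0.2, 0.9, 0.4, 0.6]    # K-CAT-SUD-E severity score (placeholder values)

        auc = roc_auc_score(ksads_diagnosis, cat_severity)
        print(f"AUC = {auc:.2f}")  # values around 0.8-0.9+ fall in the 'excellent' to 'outstanding' range reported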

    Cognitive Behavioural Therapy Competency: Pilot Data from a Comparison of Multiple Perspectives

    Background: Measurement of cognitive behavioural therapy (CBT) competency is often resource intensive. A popular emerging alternative to independent observers' ratings is the use of other perspectives for rating competency. Aims: This pilot study compared ratings of CBT competency from four perspectives (patient, therapist, supervisor, and independent observer) using the Cognitive Therapy Scale (CTS). Method: Patients (n = 12, 75% female, mean age 30.5 years) and therapists (n = 5, female, mean age 26.6 years) completed the CTS after therapy sessions, and the clinical supervisor and independent observers rated recordings of the same sessions. Results: Analyses of variance revealed that average therapist CTS competency ratings did not differ from supervisor ratings, and supervisor ratings did not differ from independent observer ratings; however, therapist ratings were higher than independent observer ratings, and patient ratings were higher than those of all other raters. Conclusions: Raters differed in competency ratings. Implications for the potential use and adaptation of CBT competency measurement methods to enhance training and implementation are discussed.
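
    A minimal sketch of the kind of rater-perspective comparison described, a one-way analysis of variance across CTS ratings from the four perspectives, is given below. The scores are invented, and the study's actual analysis may have used a different (e.g., repeated-measures) model.

        # Hedged sketch: one-way ANOVA across CTS ratings from four rater perspectives.
        from scipy import stats

        cts = {
            "patient":    [55, 58, 60, 57],   # toy CTS totals per perspective
            "therapist":  [48, 50, 47, 49],
            "supervisor": [44, 46, 45, 43],
            "observer":   [42, 44, 41, 43],
        }

        f, p = stats.f_oneway(*cts.values())
        print(f"F = {f:.2f}, p = {p:.3f}")
        for rater, scores in cts.items():
            print(rater, sum(scores) / len(scores))  # mean rating per perspective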